NatCSNN: A Convolutional Spiking Neural Network for recognition of objects extracted from natural images
Biological image processing is performed by complex neural networks composed
of thousands of neurons interconnected via thousands of synapses, some
excitatory and others inhibitory. Spiking neural models are distinguished
from classical artificial neurons by being biologically plausible and
exhibiting the same dynamics as those observed in biological neurons. This
paper proposes the Natural Convolutional Spiking Neural Network (NatCSNN), a
3-layer bio-inspired Convolutional Spiking Neural Network (CSNN) for
classifying objects extracted from natural images. A two-stage training
algorithm is proposed, using unsupervised Spike Timing Dependent Plasticity
(STDP) learning (phase 1) and ReSuMe supervised learning (phase 2). The
NatCSNN was trained and tested on the CIFAR-10 dataset and achieved an
average testing accuracy of 84.7%, an improvement over the 2-layer neural
networks previously applied to this dataset.
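The unsupervised phase-1 learning above relies on STDP, which adjusts a synapse based on the relative timing of pre- and postsynaptic spikes. A minimal sketch of the classic pair-based rule (not the paper's exact implementation; the function name, time constants, and learning rates are illustrative assumptions):

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic spike (dt = t_post - t_pre > 0 ms), depress when
    it follows. The change decays exponentially with |dt|."""
    if dt > 0:
        w += a_plus * np.exp(-dt / tau_plus)   # causal pairing: strengthen
    else:
        w -= a_minus * np.exp(dt / tau_minus)  # anti-causal pairing: weaken
    return float(np.clip(w, w_min, w_max))

# A causal pairing (pre 5 ms before post) strengthens the synapse,
# an anti-causal one weakens it:
w_pot = stdp_update(0.5, dt=5.0)    # > 0.5
w_dep = stdp_update(0.5, dt=-5.0)   # < 0.5
```

Because the update depends only on spike timing, no labels are needed, which is what makes the first training phase unsupervised.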
UAV detection : a STDP trained deep convolutional spiking neural network retina-neuromorphic approach
The Dynamic Vision Sensor (DVS) has many attributes, such as sub-millisecond response time and good low-light dynamic range, that make it well suited to the task of UAV detection. This paper proposes a system that exploits the features of an event camera solely for UAV detection, combining it with a Spiking Neural Network (SNN) trained using the unsupervised approach of Spike Time-Dependent Plasticity (STDP), to create an asynchronous, low-power system with low computational overhead. By utilising the unique features of both the sensor and the network, this results in a system that is robust to a wide variety of lighting conditions, has a high temporal resolution, and propagates only a minimal amount of information through the network, while training on the equivalent of 43,000 images. The network returns a 91% detection rate when shown other objects and can detect a UAV with less than 1% of the sensor's pixels used for processing.
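The sparsity claim above (under 1% of pixels used) follows from how event cameras work: only pixels whose brightness changes emit events. A minimal sketch of accumulating a DVS event stream into a sparse input map (not from the paper; the function name and the (t, x, y, polarity) tuple format are assumptions):

```python
import numpy as np

def events_to_sparse_input(events, height, width, window_ms=10.0):
    """Accumulate DVS events (t_ms, x, y, polarity) from one time window
    into a signed count image. Pixels with no events stay zero, so only
    a small fraction of the array carries information to the SNN."""
    frame = np.zeros((height, width), dtype=np.int32)
    for t, x, y, p in events:
        if t < window_ms:
            frame[y, x] += 1 if p else -1  # ON events +1, OFF events -1
    active_fraction = np.count_nonzero(frame) / frame.size
    return frame, active_fraction
```

The returned active fraction makes the computational saving explicit: downstream spiking layers need only process the nonzero entries.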
Bio-Inspired Multi-Layer Spiking Neural Network Extracts Discriminative Features from Speech Signals
Spiking neural networks (SNNs) enable power-efficient implementations due to
their sparse, spike-based coding scheme. This paper develops a bio-inspired SNN
that uses unsupervised learning to extract discriminative features from speech
signals, which can subsequently be used in a classifier. The architecture
consists of a spiking convolutional/pooling layer followed by a fully connected
spiking layer for feature discovery. The convolutional layer of leaky
integrate-and-fire (LIF) neurons represents primary acoustic features. The
fully connected layer is equipped with a probabilistic spike-timing-dependent
plasticity learning rule. This layer represents the discriminative features
through probabilistic LIF neurons. To assess the discriminative power of the
learned features, they are used in a hidden Markov model (HMM) for spoken digit
recognition. The experimental results show performance above 96% that compares
favorably with popular statistical feature extraction methods. Our results
provide a novel demonstration of unsupervised feature acquisition in an SNN.
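Both layers described above are built from leaky integrate-and-fire neurons. A minimal discrete-time sketch of LIF dynamics (a generic textbook formulation, not the paper's specific parameterisation; names and constants are assumptions):

```python
def lif_simulate(input_current, tau=20.0, v_rest=0.0, v_thresh=1.0,
                 v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    toward rest, integrates input current, and emits a spike (then
    resets) whenever it crosses the firing threshold."""
    v = v_rest
    spike_times = []
    for t, i_t in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_t) / tau  # leak + integration
        if v >= v_thresh:
            spike_times.append(t)
            v = v_reset                        # reset after spiking
    return spike_times

# A sustained supra-threshold current produces regular spikes;
# zero input produces none.
spikes = lif_simulate([2.0] * 100)
```

The leak term is what gives the neuron its temporal selectivity: inputs must arrive close together in time to push the potential over threshold.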
Learning precise spike timings with eligibility traces
Recent research in the field of spiking neural networks (SNNs) has shown that
recurrent variants of SNNs, namely long short-term SNNs (LSNNs), can be trained
via error gradients just as effectively as LSTMs. The underlying learning method
(e-prop) is based on a formalization of eligibility traces applied to leaky
integrate and fire (LIF) neurons. Here, we show that the proposed approach
cannot fully unfold spike timing dependent plasticity (STDP). As a consequence,
this limits in principle the inherent advantage of SNNs, that is, the potential
to develop codes that rely on precise relative spike timings. We show that
STDP-aware synaptic gradients naturally emerge within the eligibility equations
of e-prop when derived for a slightly more complex spiking neuron model, using
the Izhikevich model as an example. We also present a simple extension of
the LIF model that provides similar gradients. In a simple experiment we
demonstrate that the STDP-aware LIF neurons can learn precise spike timings
from an e-prop-based gradient signal.
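The eligibility traces central to e-prop can be sketched, in simplified form, as a low-pass filter of presynaptic activity gated by a surrogate derivative of the postsynaptic membrane potential. This is an illustrative reduction, not the full e-prop derivation; all names and constants are assumptions:

```python
def eprop_eligibility(pre_spikes, v_post, v_thresh=1.0,
                      alpha=0.9, gamma=0.3):
    """Simplified eligibility trace for one LIF synapse: z_bar low-pass
    filters presynaptic spikes; psi is a triangular surrogate derivative
    that is largest when the postsynaptic potential is near threshold.
    Their product says how 'eligible' the synapse is for a weight change
    once a (separately computed) learning signal arrives."""
    z_bar = 0.0
    traces = []
    for z_pre, v in zip(pre_spikes, v_post):
        z_bar = alpha * z_bar + z_pre  # filtered presynaptic activity
        psi = gamma * max(0.0, 1.0 - abs((v - v_thresh) / v_thresh))
        traces.append(psi * z_bar)
    return traces
```

The trace is only large when a recent presynaptic spike coincides with a near-threshold postsynaptic potential, which is exactly the timing sensitivity the abstract argues is incomplete for plain LIF neurons and recoverable with richer neuron models.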
Unsupervised Learning of Spatio-Temporal Receptive Fields from an Event-Based Vision Sensor
Neuromorphic vision sensors exhibit several advantages compared to conventional frame-based cameras, including low latencies, high dynamic range, and low data rates. However, how efficient visual representations can be learned from the output of such sensors in an unsupervised fashion is still an open problem. Here we present a spiking neural network that learns spatio-temporal receptive fields in an unsupervised way from the output of a neuromorphic event-based vision sensor. Learning relies on the combination of spike-timing-dependent plasticity with different synaptic delays, homeostatic regulation of synaptic weights and firing thresholds, and fast inhibition among neurons to decorrelate their responses. Our network develops biologically plausible spatio-temporal receptive fields when trained on real-world input and is suited for implementation on neuromorphic hardware.
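One of the stabilising mechanisms mentioned above, homeostatic regulation of firing thresholds, can be sketched as a threshold that drifts up when a neuron fires more than a target rate and down when it fires less. A minimal sketch (not the paper's rule; function name, target rate, and learning rate are assumptions):

```python
def homeostatic_threshold(spike_train, theta0=1.0, target_rate=0.05,
                          eta=0.01):
    """Adapt a neuron's firing threshold toward a target mean rate:
    each spike (1) pushes the threshold up, each silent step (0) lets
    it relax down, so over time the neuron fires near target_rate."""
    theta = theta0
    for s in spike_train:
        theta += eta * (s - target_rate)
    return theta
```

Such regulation keeps individual neurons from dominating or going silent during STDP learning, which is why it is typically paired with the fast inhibition the abstract describes.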